
    Enhanced Syllable Discrimination Thresholds in Musicians

    Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and they carry implications for the translation of musical training into long-term linguistic abilities. (Funding: Grammy Foundation; William F. Milton Fund)
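
    Concretely, a per-continuum discrimination threshold can be estimated by fitting a psychometric function to responses along the continuum. The sketch below is illustrative only: the logistic model, the VOT step values, and the response proportions are assumptions, not data or procedures from the study.

```python
# Minimal sketch (not the study's procedure): estimate a discrimination
# threshold along a synthetic syllable continuum by fitting a logistic
# psychometric function. All data values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    """Proportion of correct discriminations as a function of VOT step size."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

vot_steps = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # hypothetical, ms
p_correct = np.array([0.12, 0.25, 0.48, 0.71, 0.88, 0.96])  # hypothetical

(midpoint, slope), _ = curve_fit(logistic, vot_steps, p_correct, p0=[15.0, 0.2])

# By convention, the threshold is the step size at the 50% point of the fit;
# a steeper slope corresponds to finer temporal acuity.
print(f"discrimination threshold ~ {midpoint:.1f} ms VOT")
```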

    Effects of noise exposure on young adults with normal audiograms II: Behavioral measures

    An estimate of lifetime noise exposure was used as the primary predictor of performance on a range of behavioral tasks: frequency and intensity difference limens, amplitude modulation detection, interaural phase discrimination, the digit triplet speech test, the co-ordinate response speech measure, an auditory localization task, a musical consonance task, and a subjective report of hearing ability. One hundred and thirty-eight participants (81 females) aged 18–36 years were tested, with a wide range of self-reported noise exposure. All had normal pure-tone audiograms up to 8 kHz. It was predicted that increased lifetime noise exposure, which we assume to be concordant with noise-induced cochlear synaptopathy, would elevate behavioral thresholds, in particular for stimuli with high levels in a high spectral region. However, the results showed little effect of noise exposure on performance. There were a number of weak relations with noise exposure across the test battery, although many of these were in the opposite direction to the predictions, and none were statistically significant after correction for multiple comparisons. There were also no strong correlations between the electrophysiological measures of synaptopathy published previously and the behavioral measures reported here. Consistent with our previous electrophysiological results, the present results provide no evidence that noise exposure is related to significant perceptual deficits in young listeners with normal audiometric hearing. It is possible that the effects of noise-induced cochlear synaptopathy are only measurable in humans with extreme noise exposures, and that these effects always co-occur with a loss of audiometric sensitivity.
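
    The abstract does not specify which multiple-comparisons correction was used; the Holm-Bonferroni step-down procedure below is one standard choice, shown with placeholder p-values rather than the study's results.

```python
# Sketch of a Holm-Bonferroni step-down correction across a test battery.
# The p-values below are placeholders; the abstract does not report them.
import numpy as np

def holm_bonferroni(p_values, alpha=0.05):
    """Return a boolean array: which tests survive the step-down correction."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    survives = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank).
        if p[idx] <= alpha / (m - rank):
            survives[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return survives

p_values = [0.004, 0.03, 0.04, 0.21, 0.47]  # hypothetical battery results
print(holm_bonferroni(p_values))            # -> [ True False False False False]
```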

    Beat synchronization across the lifespan: intersection of development and musical experience

    Rhythmic entrainment, or beat synchronization, provides an opportunity to understand how multiple systems operate together to integrate sensory-motor information. Synchronization is also an essential component of musical performance that may be enhanced through musical training. Investigations of rhythmic entrainment have revealed a developmental trajectory across the lifespan, showing that synchronization improves with age and musical experience. Here, we explore the development and maintenance of synchronization from childhood through older adulthood in a large cohort of participants (N = 145), and ask how it may be altered by musical experience. We employed a uniform assessment of beat synchronization for all participants and compared performance developmentally and between individuals with and without musical experience. We show that the ability to consistently tap along to a beat improves with age into adulthood, yet in older adulthood tapping performance becomes more variable. In addition, from childhood into young adulthood, individuals are able to tap increasingly close to the beat (i.e., asynchronies decline with age); however, this trend reverses from younger into older adulthood. There is a positive association between the proportion of life spent playing music and tapping performance, which suggests a link between musical experience and auditory-motor integration. These results are broadly consistent with previous investigations into the development of beat synchronization across the lifespan, complementing existing studies and offering new insights from a different, large cross-sectional sample.
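
    As a concrete illustration of the two tapping measures described above (how close taps land to the beat, and how consistent they are), here is a minimal sketch; the tap times, tempo, and nearest-beat matching rule are assumptions, not the study's protocol.

```python
# Sketch of two standard tapping measures: mean signed asynchrony (how close
# taps land to the beat) and asynchrony variability (tapping consistency).
# Tap times, tempo, and the matching rule are assumed for illustration.
import numpy as np

def tapping_measures(tap_times, ioi):
    """tap_times in seconds; ioi = inter-onset interval of the pacing beat."""
    # Match each tap to its nearest metronome beat.
    nearest_beats = np.round(tap_times / ioi) * ioi
    asynchronies = tap_times - nearest_beats  # negative = anticipatory tap
    return asynchronies.mean(), asynchronies.std(ddof=1)

ioi = 0.5  # 120 BPM pacing stimulus (assumed)
taps = np.array([0.48, 0.97, 1.52, 1.99, 2.51, 3.03])
mean_async, variability = tapping_measures(taps, ioi)
print(f"mean asynchrony = {mean_async*1000:.0f} ms, SD = {variability*1000:.0f} ms")
```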

    Evidence for Shared Cognitive Processing of Pitch in Music and Language

    Language and music epitomize the complex representational and computational capacities of the human mind. The two are strikingly similar in their structural and expressive features, and a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct, either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, which conveys pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music. (Funding: National Science Foundation (U.S.) Graduate Research Fellowship Program; Eunice Kennedy Shriver National Institute of Child Health and Human Development (U.S.), Grant 5K99HD057522)
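
    The "accounting for performance in all control conditions" step can be illustrated with a residual-based partial correlation; the simulated scores, effect sizes, and variable names below are purely hypothetical.

```python
# Sketch of the individual-differences analysis: correlate pitch scores in
# language and music after regressing out control-task performance.
# All variable names and simulated scores are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100
controls = rng.normal(size=(n, 3))   # three control tasks (assumed)
shared = rng.normal(size=n)          # putative shared pitch mechanism
weights_l = np.array([0.3, 0.2, 0.1])
weights_m = np.array([0.2, 0.3, 0.1])
language = shared + controls @ weights_l + rng.normal(scale=0.5, size=n)
music = shared + controls @ weights_m + rng.normal(scale=0.5, size=n)

def residualize(y, X):
    """Remove what X linearly predicts of y (least-squares residuals)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r_partial = np.corrcoef(residualize(language, controls),
                        residualize(music, controls))[0, 1]
print(f"partial correlation (controls removed) = {r_partial:.2f}")
```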

    Musicians have fine-tuned neural distinction of speech syllables

    One of the benefits musicians derive from their training is an increased ability to detect small differences between sounds. Here, we asked whether musicians’ experience discriminating sounds on the basis of small acoustic differences confers advantages in the subcortical differentiation of closely related speech sounds (e.g., /ba/ and /ga/), distinguishable only by their harmonic spectra (i.e., their second formant trajectories). Although the second formant is particularly important for distinguishing stop consonants, auditory brainstem neurons do not phase-lock to its frequency range (above 1000 Hz). Instead, brainstem neurons convert this high-frequency content into neural response timing differences: speech tokens with higher formant frequencies elicit earlier brainstem responses than those with lower formant frequencies. By measuring the degree to which subcortical response timing differs across the speech syllables /ba/, /da/, and /ga/ in adult musicians and nonmusicians, we reveal that musicians demonstrate enhanced subcortical discrimination of closely related speech sounds. Furthermore, the extent of subcortical consonant discrimination correlates with speech-in-noise perception. Taken together, these findings demonstrate enhanced neural processing of speech in musicians and reveal a biological mechanism that may contribute to musicians’ enhanced speech perception in noise.
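
    One simple, hypothetical way to quantify such a subcortical timing difference between two averaged responses is the lag at which their cross-correlation peaks; the waveforms and sampling rate below are simulated, not the study's recordings.

```python
# Sketch: quantify a timing difference between two averaged brainstem
# responses as the lag that maximizes their cross-correlation.
# The waveforms and sampling rate are assumptions for illustration.
import numpy as np

fs = 20000  # Hz (assumed sampling rate)
t = np.arange(0, 0.05, 1 / fs)
response_ga = np.sin(2 * np.pi * 300 * t) * np.exp(-t * 60)
shift_samples = 8                            # /ba/ lags /ga/ by 0.4 ms here,
response_ba = np.roll(response_ga, shift_samples)  # as lower formants respond later

xcorr = np.correlate(response_ba, response_ga, mode="full")
lags = np.arange(-len(t) + 1, len(t))
best_lag = lags[np.argmax(xcorr)]
print(f"estimated latency difference = {best_lag / fs * 1000:.2f} ms")
```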

    Examining Delayed Recall in Cochlear Implant Users Using the Montreal Cognitive Assessment, California Verbal Learning Test, Third Edition, and Item Specific Deficit Approach: Preliminary Results

    Purpose: Recent studies using the Montreal Cognitive Assessment (MoCA) suggest delayed recall is challenging for cochlear implant (CI) users. To better understand the underlying processes associated with delayed recall in CI users, we administered the MoCA and the California Verbal Learning Test, Third Edition (CVLT-3), which provides a more comprehensive assessment of delayed recall ability. Methods: The MoCA and CVLT-3 were administered to 18 high-performing CI users. For the CVLT-3, both the traditional scoring and a newer scoring method, the Item-Specific Deficit Approach (ISDA), were employed. Results: The original MoCA score and the MoCA delayed recall subtest score did not relate to performance on any CVLT-3 measure, regardless of the scoring metric applied (i.e., traditional or ISDA). Encoding performance as quantified by the CVLT-3 and the ISDA was related. Consolidation, which is distinctly defined only by the ISDA, related to CVLT-3 cued delayed recall performance but not free delayed recall performance. Lastly, ISDA retrieval related to CVLT-3 measures only when modified. Conclusion: Performance on the MoCA and the CVLT-3 in this high-performing CI population was not related. We demonstrate that the ISDA can be successfully applied to CI users for the quantification and characterization of delayed recall ability; however, future work addressing lower-performing CI users and comparing them to normal-hearing controls is needed to determine the extent of potential translational applications. Our work also indicates that a modified ISDA retrieval score may be beneficial for evaluating CI users, although additional work addressing its clinical relevance is still needed. Copyright © 2021 Brumer, Elkins, Parada, Hillyer and Parbery-Clark.

    Frequency-dependent effects of background noise on subcortical response timing

    The addition of background noise to an auditory signal delays brainstem response timing. This effect has been extensively documented using manual peak selection. Peak picking, however, is impractical for large-scale studies of spectrotemporally complex stimuli, and it leaves open the question of whether noise-induced delays are frequency-dependent or occur uniformly across the frequency spectrum. Here we use an automated, objective method to examine phase shifts between auditory brainstem responses to a speech sound (/da/) presented with and without background noise. We predicted that shifts in neural response timing would be reflected in frequency-specific phase shifts. Our results indicate that the addition of background noise causes phase shifts across the subcortical response spectrum (70–1000 Hz). However, this noise-induced delay is not uniform: some frequency bands show greater shifts than others. Low-frequency phase shifts (300–500 Hz) are largest during the response to the consonant-vowel formant transition (/d/), while high-frequency shifts (720–1000 Hz) predominate during the response to the steady-state vowel (/a/). Most importantly, phase shifts occurring in specific frequency bands correlate strongly with shifts in the latencies of the predominant peaks in the auditory brainstem response, while phase shifts in other frequency bands do not. This finding confirms the validity of phase shift detection as an objective measure of timing differences and reveals that this method detects noise-induced shifts in timing that may not be captured by traditional peak latency measurements.
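
    The core arithmetic of such a phase-shift approach can be sketched as follows: a phase difference Δφ at frequency f corresponds to a latency shift of Δφ/(2πf). The simulated signals, window, and band edges below are illustrative assumptions, not the study's method.

```python
# Sketch of converting band-wise phase differences between quiet and noise
# responses into a latency shift: delay = delta_phi / (2 * pi * f).
# Signals here are simulated; the noise response lags by 0.5 ms by design.
import numpy as np

fs = 20000                      # Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
quiet = np.sin(2 * np.pi * 400 * t)
noise = np.sin(2 * np.pi * 400 * (t - 0.0005))   # delayed copy, 0.5 ms

window = np.hanning(len(t))
spec_q = np.fft.rfft(quiet * window)
spec_n = np.fft.rfft(noise * window)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

band = (freqs >= 300) & (freqs <= 500)   # low-frequency band of interest
# The cross-spectrum phase gives the quiet-vs-noise phase difference per bin.
cross = spec_q[band] * np.conj(spec_n[band])
delays = np.angle(cross) / (2 * np.pi * freqs[band])
# Weight by cross-spectral magnitude so bins with actual energy dominate.
delay_ms = np.average(delays, weights=np.abs(cross)) * 1000
print(f"estimated noise-induced delay in 300-500 Hz band: {delay_ms:.2f} ms")
```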

    Subcortical neural synchrony and absolute thresholds predict frequency discrimination independently

    The neural mechanisms of pitch coding have been debated for more than a century. The two main candidate mechanisms are coding based on the profiles of neural firing rates across auditory nerve fibers with different characteristic frequencies (place-rate coding), and coding based on the phase-locked temporal pattern of neural firing (temporal coding). Phase-locking precision can be partly assessed by recording the frequency-following response (FFR), a scalp-recorded electrophysiological response that reflects synchronous activity in subcortical neurons. Although features of the FFR have been widely used as indices of pitch coding acuity, only a handful of studies have directly investigated the relation between the FFR and behavioral pitch judgments. Furthermore, the contribution of degraded neural synchrony (as indexed by the FFR) to the pitch perception impairments of older listeners and those with hearing loss is not well understood. Here, the relation between the FFR and pure-tone frequency discrimination was investigated in listeners with a wide range of ages and absolute thresholds, to assess the respective contributions of subcortical neural synchrony and of other age-related and hearing-loss-related mechanisms to frequency discrimination performance. FFR measures of neural synchrony and absolute thresholds contributed independently to frequency discrimination performance. Age alone, i.e., once the effect of subcortical neural synchrony measures or absolute thresholds had been partialed out, did not contribute to frequency discrimination. Overall, the results suggest that frequency discrimination of pure tones may depend both on phase-locking precision and on separate mechanisms affected by hearing loss.
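
    The notion of independent contributions can be illustrated with a joint multiple regression, where each coefficient reflects one predictor's association with the outcome while the other is held constant; the simulated data and effect sizes below are hypothetical, not the study's results.

```python
# Sketch of the independent-contributions logic: regress (log) frequency
# difference limens on an FFR synchrony index and absolute threshold jointly.
# Simulated data; variable names and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 80
ffr_synchrony = rng.normal(size=n)   # e.g., FFR phase coherence (z-scored)
abs_threshold = rng.normal(size=n)   # audiometric threshold (z-scored)
# Worse discrimination (larger limens) with weaker synchrony, higher thresholds.
log_dlf = (-0.5 * ffr_synchrony + 0.4 * abs_threshold
           + rng.normal(scale=0.6, size=n))

X = np.column_stack([np.ones(n), ffr_synchrony, abs_threshold])
beta, *_ = np.linalg.lstsq(X, log_dlf, rcond=None)
print(f"intercept={beta[0]:.2f}, synchrony={beta[1]:.2f}, threshold={beta[2]:.2f}")
```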